In human communication we feel the need only to give sufficient reasons for our decisions or actions; we do not unpack the firing of neurons in our brains. The same principle applies in explainable AI, where it is tempting to try to explain the minutiae of, say, every node and link in a deep neural network. Instead, we should seek explanations at an appropriate level of abstraction for the task at hand.
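To make the contrast concrete, here is a minimal sketch, assuming scikit-learn is available; the dataset and network are purely illustrative. The same small network admits a node-level "explanation" consisting of thousands of raw connection weights, or a task-level one naming the handful of input features that actually drive its predictions.

```python
# Illustrative sketch: node-level vs. task-level explanation of one model.
# Assumes scikit-learn; the dataset and architecture are arbitrary choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X, y)

# Node-level view: every weight in the network -- exhaustive but opaque.
n_weights = sum(w.size for w in net.coefs_)
print(f"node-level view: {n_weights} raw connection weights")

# Task-level view: which inputs matter for predictions, via permutation
# feature importance -- a summary a human can actually act on.
imp = permutation_importance(net, X, y, n_repeats=5, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"{data.feature_names[i]}: importance {imp.importances_mean[i]:.3f}")
```

Permutation importance is just one of many task-level techniques one could substitute here; the point is that the second view answers the question a person would actually ask, while the first merely enumerates mechanism.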